In this paper, we present a study of regret and its expression on social media platforms. Specifically, we present a novel dataset of Reddit texts that have been classified into three classes: Regret by Action, Regret by Inaction, and No Regret. We then use this dataset to investigate the language used to express regret on Reddit and to identify the domains of text most commonly associated with regret. Our findings show that Reddit users are most likely to express regret for past actions, particularly in the domain of relationships. We also found that deep learning models using GloVe embeddings outperformed other models in all experiments, indicating the effectiveness of GloVe for representing the meaning and context of words in the domain of regret. Overall, our study provides valuable insights into the nature and prevalence of regret on social media, as well as the potential of deep learning and word embeddings for analyzing and understanding emotional language in online text. These findings have implications for the development of natural language processing algorithms and the design of social media platforms that support emotional expression and communication.
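The abstract names the technique but gives no implementation; the following is a minimal sketch of a GloVe-based deep learning classifier over the three regret classes. The file path, vector dimension, pooling strategy, and model shape are all assumptions for illustration, not the authors' setup.

```python
# Hedged sketch of a GloVe-based regret classifier (all names hypothetical).
import numpy as np
import torch
import torch.nn as nn

def load_glove(path):
    """Parse a GloVe .txt file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed(text, vectors, dim=100):
    """Mean-pool the GloVe vectors of the tokens in a post."""
    words = [vectors[w] for w in text.lower().split() if w in vectors]
    return np.mean(words, axis=0) if words else np.zeros(dim, dtype=np.float32)

class RegretClassifier(nn.Module):
    """Small feed-forward head over pooled embeddings; 3 output classes."""
    def __init__(self, dim=100, hidden=64, classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, classes)
        )

    def forward(self, x):
        return self.net(x)

# Usage sketch (the GloVe file path is an assumption):
# glove = load_glove("glove.6B.100d.txt")
# x = torch.from_numpy(embed("i wish i had applied", glove)).unsqueeze(0)
# logits = RegretClassifier()(x)  # scores for Action / Inaction / No Regret
```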
Hope is characterized as an openness of spirit toward the future: a desire, expectation, and wish for something to happen or to be true that remarkably affects a person's state of mind, emotions, behaviors, and decisions. Hope is usually associated with concepts of desired expectations and possibility/probability concerning the future. Despite its importance, hope has rarely been studied as a social media analysis task. This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into three fine-grained hope categories: "Generalized Hope", "Realistic Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets posted in the first half of 2022 were collected to build this dataset. Furthermore, we describe our annotation process and guidelines in detail and discuss the challenges of classifying hope and the limitations of existing hope speech detection corpora. In addition, we report several baselines based on different learning approaches, such as traditional machine learning, deep learning, and transformers, to benchmark our dataset. We evaluate our baselines using weighted-averaged and macro-averaged F1-scores. Observations show that a strict process for annotator selection and detailed annotation guidelines enhanced the dataset's quality. This strict annotation process resulted in promising performance for simple machine learning classifiers with only bi-grams; however, binary and multiclass hope speech detection results reveal that contextual embedding models achieve higher performance on this dataset.
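For a concrete sense of the bi-gram baseline and the two F1 aggregations the abstract mentions, here is a minimal scikit-learn sketch. The placeholder tweets stand in for the actual dataset, and the specific pipeline is an assumption, not the paper's exact classifier.

```python
# Hedged sketch: bi-gram baseline with weighted- and macro-averaged F1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Placeholder tweets/labels; the real corpus is the paper's annotated dataset.
train_texts = ["things will get better soon", "nothing will ever change"]
train_labels = ["Hope", "Not Hope"]
test_texts = ["i believe we can recover"]
test_labels = ["Hope"]

# ngram_range=(2, 2) restricts the features to bi-grams, as in the abstract.
model = make_pipeline(TfidfVectorizer(ngram_range=(2, 2)), LogisticRegression())
model.fit(train_texts, train_labels)
pred = model.predict(test_texts)

print(f1_score(test_labels, pred, average="weighted", zero_division=0))
print(f1_score(test_labels, pred, average="macro", zero_division=0))
```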
As free online encyclopedias with a large amount of content, Wikipedia and Wikidata are key to many natural language processing (NLP) tasks, such as information retrieval, knowledge base building, machine translation, text classification, and text summarization. In this paper, we introduce WikiDes, a novel dataset for generating short descriptions of Wikipedia articles as a text summarization problem. The dataset consists of 80k English samples on 6,987 topics. We set up a two-phase summarization method, description generation (Phase I) and candidate ranking (Phase II), as a strong approach that relies on transfer and contrastive learning. For description generation, T5 and BART show their superiority compared to other small-scale pre-trained models. By applying contrastive learning with the diverse inputs from beam search, the metric-based ranking models outperform the direct description generation models by up to 22 ROUGE in the topic-exclusive and topic-independent splits. Furthermore, the resulting descriptions from Phase II are supported by human evaluation, being preferred in over 45.33% of cases against the gold descriptions, compared to 23.66% for Phase I. In terms of sentiment analysis, the generated descriptions cannot effectively capture all sentiment polarities from the paragraphs, while they do this better from the gold descriptions. The automatic generation of new descriptions reduces the human effort needed to create them and enriches Wikidata-based knowledge graphs. Our paper has a practical impact on Wikipedia and Wikidata, since thousands of descriptions are involved. Finally, we expect WikiDes to be a useful dataset for related works in capturing salient information from short paragraphs. The curated dataset is publicly available at: https://github.com/declare-lab/wikides.
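The two-phase method is not spelled out in code here; the sketch below only illustrates the Phase I idea of producing several diverse beam-search candidates that a Phase II ranker could then re-score. The "t5-small" checkpoint and the input text are generic stand-ins, not the paper's fine-tuned model or data.

```python
# Sketch of Phase I: generate multiple candidate descriptions via beam search.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

paragraph = "summarize: Wikipedia is a free online encyclopedia ..."
inputs = tokenizer(paragraph, return_tensors="pt", truncation=True)

# num_return_sequences <= num_beams yields the diverse candidates that a
# Phase II ranker (trained contrastively, per the abstract) would re-score.
outputs = model.generate(
    **inputs, num_beams=8, num_return_sequences=4, max_new_tokens=32
)
candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for c in candidates:
    print(c)
```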
This overview paper describes the first shared task on fake news detection in the Urdu language. The task was posed as a binary classification problem, the goal being to differentiate between real and fake news. We provided a dataset of 900 annotated news articles for training and 400 news articles for testing. The dataset contained news in five domains: (i) health, (ii) sports, (iii) showbiz, (iv) technology, and (v) business. 42 teams from 6 different countries (India, China, Egypt, Germany, Pakistan, and the UK) registered for the task, and 9 teams submitted their experimental results. The participants used various machine learning methods, ranging from feature-based traditional machine learning to neural network techniques. The best-performing system achieved an F-score of 0.90, showing that the BERT-based approach outperformed other machine learning techniques.
With the growing influence of social media platforms, the impact of abusive content is becoming increasingly pronounced. The importance of automatically detecting threatening and abusive language cannot be overstated. However, most existing research and state-of-the-art methods target English, with limited work on low-resource languages. In this paper, we present two shared tasks on abusive and threatening language detection for Urdu, a language with more than 170 million speakers worldwide. Both are posed as binary classification tasks in which participating systems are required to classify tweets in Urdu into two classes, namely: (i) abusive and non-abusive for the first task, and (ii) threatening and non-threatening for the second. We provide two manually annotated datasets containing tweets labelled as (i) abusive and non-abusive, and (ii) threatening and non-threatening. The abusive dataset contains 2,400 annotated tweets in the train part and 1,100 annotated tweets in the test part. The threatening dataset contains 6,000 annotated tweets in the train part and 3,950 annotated tweets in the test part. We also provide logistic regression and BERT-based baseline classifiers for both tasks. In this shared task, 21 teams from six countries (India, Pakistan, China, Malaysia, the United Arab Emirates, and Taiwan) registered to participate; 10 teams submitted their runs for Subtask A, abusive language detection, 9 teams submitted their runs for Subtask B, threatening language detection, and seven teams submitted their technical reports. The best-performing system achieved an F1-score of 0.880 for Subtask A and 0.545 for Subtask B. For both subtasks, M-BERT-based transformer models showed the best performance.
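As a hedged sketch of what an M-BERT-style baseline for these binary tasks could look like (the checkpoint choice and label mapping are assumptions, and the classification head is randomly initialized, so it would have to be fine-tuned on the task data before its outputs mean anything):

```python
# Sketch of an M-BERT binary-classification setup (untrained head; needs
# fine-tuning on the shared-task data before use).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-multilingual-cased"  # covers Urdu among ~100 languages
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

tweet = "..."  # placeholder for an Urdu tweet from the shared-task data
batch = tokenizer(tweet, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**batch).logits
# Classes 0/1 would map to non-abusive/abusive (or non-threatening/threatening).
print(logits.softmax(dim=-1))
```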
This study reports the second shared task, named UrduFake@FIRE2021, on fake news detection in the Urdu language. It is a binary classification problem in which the task is to classify a given news article into one of two classes: (i) real news, or (ii) fake news. In this shared task, 34 teams from 7 different countries (China, Egypt, Israel, India, Mexico, Pakistan, and the UAE) registered to participate, 18 teams submitted their experimental results, and 11 teams submitted their technical reports. The proposed systems were based on various count-based features and used different classifiers as well as neural network architectures. The stochastic gradient descent (SGD) algorithm outperformed other classifiers and achieved an F-score of 0.679.
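To make the winning family of approach concrete, here is a minimal sketch of count-based features feeding a linear classifier trained by stochastic gradient descent. The placeholder articles stand in for the shared-task corpus; this is an illustration of the technique class, not the winning team's system.

```python
# Hedged sketch: count-based features + SGD-trained linear classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

train_articles = ["<real Urdu news text>", "<fake Urdu news text>"]
train_labels = [0, 1]  # 0 = real news, 1 = fake news

# Default hinge loss gives a linear SVM optimized by stochastic gradient
# descent; CountVectorizer supplies the count-based bag-of-words features.
clf = make_pipeline(CountVectorizer(), SGDClassifier())
clf.fit(train_articles, train_labels)
print(clf.predict(["<unseen article>"]))
```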
Automatic detection of fake news is a highly important task in the contemporary world. This study reports the second shared task, called UrduFake@FIRE2021, on fake news detection in Urdu. The goal of the shared task is to motivate the community to come up with efficient methods for solving this vital problem, particularly for the Urdu language. The task is posed as a binary classification problem of labelling a given news article as real or fake. The organizers provided a dataset comprising news in five domains: (i) health, (ii) sports, (iii) showbiz, (iv) technology, and (v) business, split into training and testing sets. The training set contains 1,300 annotated news articles (750 real news, 550 fake news), while the testing set contains 300 news articles (200 real, 100 fake). 34 teams from 7 different countries (China, Egypt, Israel, India, Mexico, Pakistan, and the UAE) registered to participate in the UrduFake@FIRE2021 shared task. Of those, 18 teams submitted their experimental results and 11 of them submitted technical reports, which is considerably higher than for the UrduFake shared task in 2020, when only 6 teams submitted technical reports. The technical reports submitted by the participants demonstrated different data representation techniques, ranging from count-based BoW features to word vector embeddings, and the use of numerous machine learning algorithms, from traditional SVM to various neural network architectures, including transformers such as BERT and RoBERTa. In this year's competition, the best-performing system obtained an F1-macro score of 0.679, which is lower than the past year's best result of 0.907 F1-macro. Admittedly, while the training sets for the past and the current years largely overlap, the testing set for this year is completely different.
Given the current social distancing restrictions across the world, most individuals now use social media as their major medium of communication. As a result, millions of people suffering from mental illness have been isolated and are unable to get help in person. They have become increasingly reliant on online venues to express themselves and to seek advice on dealing with their mental disorders. According to the World Health Organization (WHO), approximately 450 million people are affected. Mental illnesses, such as depression and anxiety, are extremely common and affect individuals' physical health. Artificial intelligence (AI) methods have recently been proposed to assist mental health providers, including psychiatrists and psychologists, based on patients' authentic information (e.g., medical records, behavioural data, social media usage, etc.). AI innovations have demonstrated strong performance in numerous real-world applications, ranging from computer vision to healthcare. This study analyzes unstructured user data from the Reddit platform and classifies five common mental disorders: depression, anxiety, bipolar disorder, ADHD, and PTSD. We trained traditional machine learning, deep learning, and transfer learning multi-class models to detect individuals' mental disorders. This work will benefit the public health system by automating the detection process and informing the appropriate authorities about people who require emergency assistance.
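As an illustration of the transfer-learning variant among the model families listed, here is a small sketch that freezes a pretrained encoder as a feature extractor and trains a shallow multi-class head on top. The checkpoint, mean pooling, and the toy posts are all assumptions, not the study's pipeline.

```python
# Hedged sketch of a transfer-learning baseline: frozen encoder features
# plus a shallow multi-class classifier over the five disorder labels.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def features(texts):
    """Mean-pooled last hidden states of the frozen encoder."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    return hidden.mean(dim=1).numpy()

# Placeholder posts; the study uses Reddit submissions, and the full label
# set is depression / anxiety / bipolar / ADHD / PTSD.
posts = ["i can't focus on anything", "i feel hopeless every day"]
labels = ["adhd", "depression"]
clf = LogisticRegression(max_iter=1000).fit(features(posts), labels)
print(clf.predict(features(["my mood swings are extreme"])))
```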
In the past years, deep learning has seen increasing use in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide Images under domain shift, using the H&E-stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, and Test-Time Data Augmentation, as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration, and that Test-Time Data Augmentation can be a promising alternative when an appropriate set of augmentations is chosen. Across methods, rejecting the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
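A minimal sketch of one of the compared methods, Monte-Carlo Dropout with entropy-based rejection of the most uncertain tiles, is given below. The tiny stand-in model and the random batch are placeholders, not the paper's Camelyon17 pipeline or published framework.

```python
# Hedged sketch: MC Dropout uncertainty with rejection of uncertain tiles.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for a tile classifier (2 classes)
    nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
    nn.Dropout(p=0.5), nn.Linear(128, 2),
)

def mc_dropout_predict(x, n_samples=20):
    """Keep dropout active at test time and average softmax over samples."""
    model.train()  # train() keeps Dropout stochastic; BatchNorm would need care
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    # Predictive entropy of the mean distribution as the uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

tiles = torch.randn(8, 3, 32, 32)  # fake batch standing in for WSI tiles
mean, entropy = mc_dropout_predict(tiles)
keep = entropy < entropy.quantile(0.75)  # reject the most uncertain quarter
print(mean[keep].argmax(dim=-1))  # predictions on the retained tiles only
```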
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there is considerable interest from an artificial intelligence (AI) perspective in providing it with such a skill. Beyond that, a plethora of use cases opens up for the computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.